You are currently looking at version 1.0 of this notebook. To download notebooks and datafiles, as well as get help on Jupyter notebooks in the Coursera platform, visit the Jupyter Notebook FAQ course resource.


Assignment 4 - Understanding and Predicting Property Maintenance Fines

This assignment is based on a data challenge from the Michigan Data Science Team (MDST).

The Michigan Data Science Team (MDST) and the Michigan Student Symposium for Interdisciplinary Statistical Sciences (MSSISS) have partnered with the City of Detroit to help solve one of the most pressing problems facing Detroit - blight. Blight violations are issued by the city to individuals who allow their properties to remain in a deteriorated condition. Every year, the city of Detroit issues millions of dollars in fines to residents and every year, many of these fines remain unpaid. Enforcing unpaid blight fines is a costly and tedious process, so the city wants to know: how can we increase blight ticket compliance?

The first step in answering this question is understanding when and why a resident might fail to comply with a blight ticket. This is where predictive modeling comes in. For this assignment, your task is to predict whether a given blight ticket will be paid on time.

All data for this assignment has been provided to us through the Detroit Open Data Portal. Only the data already included in your Coursera directory can be used for training the model for this assignment. Nonetheless, we encourage you to look into other related Detroit datasets on the portal to help inform feature creation and model selection.

We provide you with two data files for use in training and validating your models: train.csv and test.csv. Each row in these two files corresponds to a single blight ticket, and includes information about when, why, and to whom each ticket was issued. The target variable is compliance, which is True if the ticket was paid early, on time, or within one month of the hearing date; False if the ticket was paid after the hearing date or not at all; and Null if the violator was found not responsible. Compliance, as well as a handful of other variables that will not be available at test time, is only included in train.csv.

Note: tickets where the violator was found not responsible are not considered during evaluation. They are included in the training set as an additional source of data for visualization, and to enable unsupervised and semi-supervised approaches; however, they are not included in the test set.


File descriptions (Use only this data for training your model!)

train.csv - the training set (all tickets issued 2004-2011)
test.csv - the test set (all tickets issued 2012-2016)
addresses.csv & latlons.csv - mapping from ticket id to addresses, and from addresses to lat/lon coordinates. 
 Note: misspelled addresses may be incorrectly geolocated.


Data fields

train.csv & test.csv

ticket_id - unique identifier for tickets
agency_name - Agency that issued the ticket
inspector_name - Name of inspector that issued the ticket
violator_name - Name of the person/organization that the ticket was issued to
violation_street_number, violation_street_name, violation_zip_code - Address where the violation occurred
mailing_address_str_number, mailing_address_str_name, city, state, zip_code, non_us_str_code, country - Mailing address of the violator
ticket_issued_date - Date and time the ticket was issued
hearing_date - Date and time the violator's hearing was scheduled
violation_code, violation_description - Type of violation
disposition - Judgment and judgment type
fine_amount - Violation fine amount, excluding fees
admin_fee - $20 fee assigned to responsible judgments
state_fee - $10 fee assigned to responsible judgments
late_fee - 10% fee assigned to responsible judgments
discount_amount - discount applied, if any
clean_up_cost - DPW clean-up or graffiti removal cost
judgment_amount - Sum of all fines and fees
grafitti_status - Flag for graffiti violations

train.csv only

payment_amount - Amount paid, if any
payment_date - Date payment was made, if it was received
payment_status - Current payment status as of Feb 1 2017
balance_due - Fines and fees still owed
collection_status - Flag for payments in collections
compliance [target variable for prediction] 
 Null = Not responsible
 0 = Responsible, non-compliant
 1 = Responsible, compliant
compliance_detail - More information on why each ticket was marked compliant or non-compliant



Evaluation

Your predictions will be given as the probability that the corresponding blight ticket will be paid on time.

The evaluation metric for this assignment is the Area Under the ROC Curve (AUC).

Your grade will be based on the AUC score computed for your classifier. A model with an AUROC above 0.7 passes this assignment; above 0.75 receives full points.


For this assignment, create a function that trains a model to predict blight ticket compliance in Detroit using train.csv. Using this model, return a series of length 61001 with the values being the probability that each corresponding ticket from test.csv will be paid, and the index being the ticket_id.

Example:

ticket_id
   284932    0.531842
   285362    0.401958
   285361    0.105928
   285338    0.018572
             ...
   376499    0.208567
   376500    0.818759
   369851    0.018528
   Name: compliance, dtype: float32

In [1]:
import pandas as pd
import numpy as np

def blight_model():
    
    # Your code here
    
    return # Your answer here
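
The returned object is just a float32 Series of compliance probabilities indexed by ticket_id. A tiny shape-check sketch using the two example tickets above (the probability values are illustrative, taken from the example output):

In [ ]:
# Sketch: the expected shape of the return value of blight_model().
example = pd.Series([0.531842, 0.401958],
                    index=pd.Index([284932, 285362], name='ticket_id'),
                    name='compliance', dtype='float32')
example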

In [2]:
df_train = pd.read_csv('train.csv', encoding = "ISO-8859-1")
df_test = pd.read_csv('test.csv', encoding = "ISO-8859-1")

df_train.columns


/Users/fuyangliu/Workspace/coursera-Applied-Machine-Learning-in-Python/venv/lib/python3.6/site-packages/IPython/core/interactiveshell.py:2698: DtypeWarning: Columns (11,12,31) have mixed types. Specify dtype option on import or set low_memory=False.
  interactivity=interactivity, compiler=compiler, result=result)
Out[2]:
Index(['ticket_id', 'agency_name', 'inspector_name', 'violator_name',
       'violation_street_number', 'violation_street_name',
       'violation_zip_code', 'mailing_address_str_number',
       'mailing_address_str_name', 'city', 'state', 'zip_code',
       'non_us_str_code', 'country', 'ticket_issued_date', 'hearing_date',
       'violation_code', 'violation_description', 'disposition', 'fine_amount',
       'admin_fee', 'state_fee', 'late_fee', 'discount_amount',
       'clean_up_cost', 'judgment_amount', 'payment_amount', 'balance_due',
       'payment_date', 'payment_status', 'collection_status',
       'grafitti_status', 'compliance_detail', 'compliance'],
      dtype='object')
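
The DtypeWarning above comes from pandas inferring column types chunk by chunk. A minimal sketch of one way to avoid it, reading each file in a single pass:

In [ ]:
# Sketch: low_memory=False makes pandas type-infer the whole file at once,
# avoiding the mixed-type DtypeWarning for columns 11, 12 and 31.
df_train = pd.read_csv('train.csv', encoding="ISO-8859-1", low_memory=False)
df_test = pd.read_csv('test.csv', encoding="ISO-8859-1", low_memory=False)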

In [3]:
list_to_remove = ['balance_due',
 'collection_status',
 'compliance_detail',
 'payment_amount',
 'payment_date',
 'payment_status']

list_to_remove_all = ['violator_name', 'zip_code', 'country', 'city',
                      'inspector_name', 'violation_street_number', 'violation_street_name',
                      'violation_zip_code', 'violation_description',
                      'mailing_address_str_number', 'mailing_address_str_name',
                      'non_us_str_code',
                      'ticket_issued_date', 'hearing_date']

In [4]:
df_train.drop(list_to_remove, axis=1, inplace=True)
df_train.drop(list_to_remove_all, axis=1, inplace=True)
df_test.drop(list_to_remove_all, axis=1, inplace=True)

df_train.drop('grafitti_status', axis=1, inplace=True)
df_test.drop('grafitti_status', axis=1, inplace=True)

In [5]:
df_train.head()


Out[5]:
ticket_id agency_name state violation_code disposition fine_amount admin_fee state_fee late_fee discount_amount clean_up_cost judgment_amount compliance
0 22056 Buildings, Safety Engineering & Env Department IL 9-1-36(a) Responsible by Default 250.0 20.0 10.0 25.0 0.0 0.0 305.0 0.0
1 27586 Buildings, Safety Engineering & Env Department MI 61-63.0600 Responsible by Determination 750.0 20.0 10.0 75.0 0.0 0.0 855.0 1.0
2 22062 Buildings, Safety Engineering & Env Department MI 9-1-36(a) Not responsible by Dismissal 250.0 0.0 0.0 0.0 0.0 0.0 0.0 NaN
3 22084 Buildings, Safety Engineering & Env Department MI 9-1-36(a) Not responsible by City Dismissal 250.0 0.0 0.0 0.0 0.0 0.0 0.0 NaN
4 22093 Buildings, Safety Engineering & Env Department MI 9-1-36(a) Not responsible by Dismissal 250.0 0.0 0.0 0.0 0.0 0.0 0.0 NaN

In [6]:
df_train.violation_code.unique().size


Out[6]:
235

In [7]:
df_train.disposition.unique().size


Out[7]:
9

In [8]:
df_latlons = pd.read_csv('latlons.csv')

In [9]:
df_latlons.head()


Out[9]:
address lat lon
0 4300 rosa parks blvd, Detroit MI 48208 42.346169 -83.079962
1 14512 sussex, Detroit MI 42.394657 -83.194265
2 3456 garland, Detroit MI 42.373779 -82.986228
3 5787 wayburn, Detroit MI 42.403342 -82.957805
4 5766 haverhill, Detroit MI 42.407255 -82.946295

In [10]:
df_address =  pd.read_csv('addresses.csv')
df_address.head()


Out[10]:
ticket_id address
0 22056 2900 tyler, Detroit MI
1 27586 4311 central, Detroit MI
2 22062 1449 longfellow, Detroit MI
3 22084 1441 longfellow, Detroit MI
4 22093 2449 churchill, Detroit MI

In [11]:
df_id_latlons = df_address.set_index('address').join(df_latlons.set_index('address'))

In [12]:
df_id_latlons.head()


Out[12]:
ticket_id lat lon
address
-11064 gratiot, Detroit MI 328722 42.406935 -82.995599
-11871 wilfred, Detroit MI 350971 42.411288 -82.993674
-15126 harper, Detroit MI 344821 42.406402 -82.957525
0 10th st, Detroit MI 24928 42.325689 -83.064330
0 10th st, Detroit MI 71887 42.325689 -83.064330

In [13]:
df_train = df_train.set_index('ticket_id').join(df_id_latlons.set_index('ticket_id'))
df_test = df_test.set_index('ticket_id').join(df_id_latlons.set_index('ticket_id'))

In [14]:
df_train.head()


Out[14]:
agency_name state violation_code disposition fine_amount admin_fee state_fee late_fee discount_amount clean_up_cost judgment_amount compliance lat lon
ticket_id
22056 Buildings, Safety Engineering & Env Department IL 9-1-36(a) Responsible by Default 250.0 20.0 10.0 25.0 0.0 0.0 305.0 0.0 42.390729 -83.124268
27586 Buildings, Safety Engineering & Env Department MI 61-63.0600 Responsible by Determination 750.0 20.0 10.0 75.0 0.0 0.0 855.0 1.0 42.326937 -83.135118
22062 Buildings, Safety Engineering & Env Department MI 9-1-36(a) Not responsible by Dismissal 250.0 0.0 0.0 0.0 0.0 0.0 0.0 NaN 42.380516 -83.096069
22084 Buildings, Safety Engineering & Env Department MI 9-1-36(a) Not responsible by City Dismissal 250.0 0.0 0.0 0.0 0.0 0.0 0.0 NaN 42.380570 -83.095919
22093 Buildings, Safety Engineering & Env Department MI 9-1-36(a) Not responsible by Dismissal 250.0 0.0 0.0 0.0 0.0 0.0 0.0 NaN 42.145257 -83.208233

In [15]:
df_train.agency_name.value_counts()


Out[15]:
Buildings, Safety Engineering & Env Department    157784
Department of Public Works                         74717
Health Department                                   8903
Detroit Police Department                           8900
Neighborhood City Halls                                2
Name: agency_name, dtype: int64

In [16]:
# df_train.country.value_counts()
# so we remove zip code and country as well

In [17]:
vio_code_freq10 = df_train.violation_code.value_counts().index[0:10]
vio_code_freq10


Out[17]:
Index(['9-1-36(a)', '9-1-81(a)', '22-2-88', '9-1-104', '22-2-88(b)', '22-2-45',
       '9-1-43(a) - (Dwellin', '9-1-105', '9-1-110(a)', '22-2-22'],
      dtype='object')

In [18]:
df_train['violation_code_freq10'] = [list(vio_code_freq10).index(c) if c in vio_code_freq10 else -1 for c in df_train.violation_code ]
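
The list comprehension above does a linear scan of the top-10 list for every ticket. An equivalent vectorized sketch of the same encoding:

In [ ]:
# Sketch: same freq-10 encoding via a dict lookup; codes outside the top 10 map to -1.
code_rank = {c: i for i, c in enumerate(vio_code_freq10)}
df_train['violation_code_freq10'] = (df_train.violation_code.map(code_rank)
                                     .fillna(-1).astype(int))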

In [19]:
df_train.head()


Out[19]:
agency_name state violation_code disposition fine_amount admin_fee state_fee late_fee discount_amount clean_up_cost judgment_amount compliance lat lon violation_code_freq10
ticket_id
22056 Buildings, Safety Engineering & Env Department IL 9-1-36(a) Responsible by Default 250.0 20.0 10.0 25.0 0.0 0.0 305.0 0.0 42.390729 -83.124268 0
27586 Buildings, Safety Engineering & Env Department MI 61-63.0600 Responsible by Determination 750.0 20.0 10.0 75.0 0.0 0.0 855.0 1.0 42.326937 -83.135118 -1
22062 Buildings, Safety Engineering & Env Department MI 9-1-36(a) Not responsible by Dismissal 250.0 0.0 0.0 0.0 0.0 0.0 0.0 NaN 42.380516 -83.096069 0
22084 Buildings, Safety Engineering & Env Department MI 9-1-36(a) Not responsible by City Dismissal 250.0 0.0 0.0 0.0 0.0 0.0 0.0 NaN 42.380570 -83.095919 0
22093 Buildings, Safety Engineering & Env Department MI 9-1-36(a) Not responsible by Dismissal 250.0 0.0 0.0 0.0 0.0 0.0 0.0 NaN 42.145257 -83.208233 0

In [20]:
df_train.violation_code_freq10.value_counts()


Out[20]:
 0    99091
 1    43471
 2    28720
-1    24883
 3    22536
 4     7238
 5     5394
 6     5332
 7     5072
 8     4814
 9     3755
Name: violation_code_freq10, dtype: int64

In [21]:
# drop violation code

df_train.drop('violation_code', axis=1, inplace=True)

df_test['violation_code_freq10'] = [list(vio_code_freq10).index(c) if c in vio_code_freq10 else -1 for c in df_test.violation_code ]
df_test.drop('violation_code', axis=1, inplace=True)

In [22]:
#df_train.grafitti_status.fillna('None', inplace=True)
#df_test.grafitti_status.fillna('None', inplace=True)

In [23]:
df_train = df_train[df_train.compliance.notnull()]

In [24]:
df_train.isnull().sum()


Out[24]:
agency_name               0
state                    84
disposition               0
fine_amount               0
admin_fee                 0
state_fee                 0
late_fee                  0
discount_amount           0
clean_up_cost             0
judgment_amount           0
compliance                0
lat                       2
lon                       2
violation_code_freq10     0
dtype: int64

In [25]:
df_test.isnull().sum()


Out[25]:
agency_name                0
state                    331
disposition                0
fine_amount                0
admin_fee                  0
state_fee                  0
late_fee                   0
discount_amount            0
clean_up_cost              0
judgment_amount            0
lat                        5
lon                        5
violation_code_freq10      0
dtype: int64

In [26]:
df_train.lat.fillna(method='pad', inplace=True)
df_train.lon.fillna(method='pad', inplace=True)
df_train.state.fillna(method='pad', inplace=True)

df_test.lat.fillna(method='pad', inplace=True)
df_test.lon.fillna(method='pad', inplace=True)
df_test.state.fillna(method='pad', inplace=True)
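
Forward-filling (method='pad') borrows lat/lon and state from whatever ticket happens to sit on the previous row. A sketch of a less arbitrary alternative; the 'MI' default is an assumption, it being by far the most common state in this data:

In [ ]:
# Sketch: fill missing coordinates with the training-set mean, and missing
# state with 'MI' (assumed mode), instead of a neighboring row's values.
for col in ['lat', 'lon']:
    df_train[col] = df_train[col].fillna(df_train[col].mean())
    df_test[col] = df_test[col].fillna(df_train[col].mean())
df_train['state'] = df_train['state'].fillna('MI')
df_test['state'] = df_test['state'].fillna('MI')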

In [27]:
df_train.isnull().sum().sum()


Out[27]:
0

In [28]:
df_test.isnull().sum().sum()


Out[28]:
0


In [29]:
df_train.head()


Out[29]:
agency_name state disposition fine_amount admin_fee state_fee late_fee discount_amount clean_up_cost judgment_amount compliance lat lon violation_code_freq10
ticket_id
22056 Buildings, Safety Engineering & Env Department IL Responsible by Default 250.0 20.0 10.0 25.0 0.0 0.0 305.0 0.0 42.390729 -83.124268 0
27586 Buildings, Safety Engineering & Env Department MI Responsible by Determination 750.0 20.0 10.0 75.0 0.0 0.0 855.0 1.0 42.326937 -83.135118 -1
22046 Buildings, Safety Engineering & Env Department CA Responsible by Default 250.0 20.0 10.0 25.0 0.0 0.0 305.0 0.0 42.145257 -83.208233 0
18738 Buildings, Safety Engineering & Env Department MI Responsible by Default 750.0 20.0 10.0 75.0 0.0 0.0 855.0 0.0 42.433466 -83.023493 -1
18735 Buildings, Safety Engineering & Env Department MI Responsible by Default 100.0 20.0 10.0 10.0 0.0 0.0 140.0 0.0 42.388641 -83.037858 -1

In [30]:
one_hot_encode_columns = ['agency_name', 'state', 'disposition', 'violation_code_freq10']

In [31]:
[ df_train[c].unique().size for c in one_hot_encode_columns]


Out[31]:
[5, 59, 4, 11]

In [32]:
# state has 59 levels, but we keep it for one-hot encoding; city and zip_code were already dropped

In [33]:
one_hot_encode_columns = ['agency_name', 'state', 'disposition', 'violation_code_freq10']

df_train = pd.get_dummies(df_train, columns=one_hot_encode_columns)
df_test = pd.get_dummies(df_test, columns=one_hot_encode_columns)
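
One caveat with encoding train and test separately: get_dummies only creates columns for the values each frame actually contains, so a state that appears only in train would be missing from df_test. A sketch of aligning the test columns to the training features:

In [ ]:
# Sketch: align df_test's columns to the training feature set; dummy columns
# absent from the test data are added and filled with 0.
feature_cols = df_train.columns.drop('compliance')
df_test = df_test.reindex(columns=feature_cols, fill_value=0)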

In [34]:
df_train.head()


Out[34]:
fine_amount admin_fee state_fee late_fee discount_amount clean_up_cost judgment_amount compliance lat lon ... violation_code_freq10_0 violation_code_freq10_1 violation_code_freq10_2 violation_code_freq10_3 violation_code_freq10_4 violation_code_freq10_5 violation_code_freq10_6 violation_code_freq10_7 violation_code_freq10_8 violation_code_freq10_9
ticket_id
22056 250.0 20.0 10.0 25.0 0.0 0.0 305.0 0.0 42.390729 -83.124268 ... 1 0 0 0 0 0 0 0 0 0
27586 750.0 20.0 10.0 75.0 0.0 0.0 855.0 1.0 42.326937 -83.135118 ... 0 0 0 0 0 0 0 0 0 0
22046 250.0 20.0 10.0 25.0 0.0 0.0 305.0 0.0 42.145257 -83.208233 ... 1 0 0 0 0 0 0 0 0 0
18738 750.0 20.0 10.0 75.0 0.0 0.0 855.0 0.0 42.433466 -83.023493 ... 0 0 0 0 0 0 0 0 0 0
18735 100.0 20.0 10.0 10.0 0.0 0.0 140.0 0.0 42.388641 -83.037858 ... 0 0 0 0 0 0 0 0 0 0

5 rows × 89 columns

Train / keep (hold-out) / test split


In [35]:
from sklearn.model_selection import train_test_split
train_features = df_train.columns.drop('compliance')
train_features


Out[35]:
Index(['fine_amount', 'admin_fee', 'state_fee', 'late_fee', 'discount_amount',
       'clean_up_cost', 'judgment_amount', 'lat', 'lon',
       'agency_name_Buildings, Safety Engineering & Env Department',
       'agency_name_Department of Public Works',
       'agency_name_Detroit Police Department',
       'agency_name_Health Department', 'agency_name_Neighborhood City Halls',
       'state_AK', 'state_AL', 'state_AR', 'state_AZ', 'state_BC', 'state_BL',
       'state_CA', 'state_CO', 'state_CT', 'state_DC', 'state_DE', 'state_FL',
       'state_GA', 'state_HI', 'state_IA', 'state_ID', 'state_IL', 'state_IN',
       'state_KS', 'state_KY', 'state_LA', 'state_MA', 'state_MD', 'state_ME',
       'state_MI', 'state_MN', 'state_MO', 'state_MS', 'state_MT', 'state_NB',
       'state_NC', 'state_ND', 'state_NH', 'state_NJ', 'state_NM', 'state_NV',
       'state_NY', 'state_OH', 'state_OK', 'state_ON', 'state_OR', 'state_PA',
       'state_PR', 'state_QC', 'state_QL', 'state_RI', 'state_SC', 'state_SD',
       'state_TN', 'state_TX', 'state_UK', 'state_UT', 'state_VA', 'state_VI',
       'state_VT', 'state_WA', 'state_WI', 'state_WV', 'state_WY',
       'disposition_Responsible (Fine Waived) by Deter',
       'disposition_Responsible by Admission',
       'disposition_Responsible by Default',
       'disposition_Responsible by Determination', 'violation_code_freq10_-1',
       'violation_code_freq10_0', 'violation_code_freq10_1',
       'violation_code_freq10_2', 'violation_code_freq10_3',
       'violation_code_freq10_4', 'violation_code_freq10_5',
       'violation_code_freq10_6', 'violation_code_freq10_7',
       'violation_code_freq10_8', 'violation_code_freq10_9'],
      dtype='object')

In [36]:
X_data, X_keep, y_data, y_keep = train_test_split(df_train[train_features], 
                                                    df_train.compliance, 
                                                    random_state=0,
                                                    test_size=0.05)

In [37]:
print(X_data.shape, X_keep.shape)


(151886, 88) (7994, 88)

In [38]:
X_train, X_test, y_train, y_test = train_test_split(X_data[train_features], 
                                                    y_data, 
                                                    random_state=0,
                                                    test_size=0.2)

In [39]:
print(X_train.shape, X_test.shape)


(121508, 88) (30378, 88)

Train a neural net and check its performance


In [40]:
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import MinMaxScaler

scaler = MinMaxScaler()

X_train_scaled = scaler.fit_transform(X_train)
X_test_scaled = scaler.transform(X_test)

clf = MLPClassifier(hidden_layer_sizes = [100], alpha = 5,
                    random_state = 0,
                    shuffle = True,
                    learning_rate_init = 0.001,
                    batch_size = 200,
                    learning_rate = 'adaptive',
                    verbose = True,
                    solver='sgd')
clf.fit(X_train_scaled, y_train)
print(clf.loss_)


Iteration 1, loss = 1.33222901
Iteration 2, loss = 1.02134640
Iteration 3, loss = 0.81932192
Iteration 4, loss = 0.67097577
Iteration 5, loss = 0.56191809
Iteration 6, loss = 0.48163588
Iteration 7, loss = 0.42246590
Iteration 8, loss = 0.37881427
Iteration 9, loss = 0.34660461
Iteration 10, loss = 0.32283019
Iteration 11, loss = 0.30525833
Iteration 12, loss = 0.29227551
Iteration 13, loss = 0.28266758
Iteration 14, loss = 0.27554787
Iteration 15, loss = 0.27027879
Iteration 16, loss = 0.26636947
Iteration 17, loss = 0.26346727
Iteration 18, loss = 0.26131073
Iteration 19, loss = 0.25970108
Iteration 20, loss = 0.25850321
Iteration 21, loss = 0.25760909
Iteration 22, loss = 0.25693087
Iteration 23, loss = 0.25642686
Iteration 24, loss = 0.25603736
Iteration 25, loss = 0.25575373
Iteration 26, loss = 0.25552523
Iteration 27, loss = 0.25534752
Iteration 28, loss = 0.25522620
Iteration 29, loss = 0.25512307
Iteration 30, loss = 0.25504225
Iteration 31, loss = 0.25498360
Iteration 32, loss = 0.25492646
Training loss did not improve more than tol=0.000100 for two consecutive epochs. Setting learning rate to 0.000200
Iteration 33, loss = 0.25489744
Iteration 34, loss = 0.25488791
Iteration 35, loss = 0.25487978
Training loss did not improve more than tol=0.000100 for two consecutive epochs. Setting learning rate to 0.000040
Iteration 36, loss = 0.25487354
Iteration 37, loss = 0.25487200
Iteration 38, loss = 0.25487054
Training loss did not improve more than tol=0.000100 for two consecutive epochs. Setting learning rate to 0.000008
Iteration 39, loss = 0.25486933
Iteration 40, loss = 0.25486903
Iteration 41, loss = 0.25486877
Training loss did not improve more than tol=0.000100 for two consecutive epochs. Setting learning rate to 0.000002
Iteration 42, loss = 0.25486849
Iteration 43, loss = 0.25486843
Iteration 44, loss = 0.25486838
Training loss did not improve more than tol=0.000100 for two consecutive epochs. Setting learning rate to 0.000000
Iteration 45, loss = 0.25486833
Iteration 46, loss = 0.25486832
Iteration 47, loss = 0.25486830
Training loss did not improve more than tol=0.000100 for two consecutive epochs. Learning rate too small. Stopping.
0.254868304804

In [41]:
clf.score(X_train_scaled, y_train)


Out[41]:
0.92750271587055999

In [42]:
clf.score(X_test_scaled, y_test)


Out[42]:
0.92734873921917171

In [43]:
from sklearn.metrics import recall_score, precision_score, f1_score

train_pred = clf.predict(X_train_scaled)
print(precision_score(y_train, train_pred),
      recall_score(y_train, train_pred),
      f1_score(y_train, train_pred))


0.0 0.0 0.0
/Users/fuyangliu/Workspace/coursera-Applied-Machine-Learning-in-Python/venv/lib/python3.6/site-packages/sklearn/metrics/classification.py:1113: UndefinedMetricWarning: Precision is ill-defined and being set to 0.0 due to no predicted samples.
  'precision', 'predicted', average, warn_for)
/Users/fuyangliu/Workspace/coursera-Applied-Machine-Learning-in-Python/venv/lib/python3.6/site-packages/sklearn/metrics/classification.py:1113: UndefinedMetricWarning: F-score is ill-defined and being set to 0.0 due to no predicted samples.
  'precision', 'predicted', average, warn_for)
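
The warnings show the model never predicts the positive class - with alpha = 5 it has collapsed to always predicting non-compliant. Since the assignment is graded on AUC, which ranks the predicted probabilities rather than thresholding them at 0.5, a quick AUC check is more informative here (a sketch):

In [ ]:
# Sketch: AUC is computed from the probability ranking, so it can still be
# meaningful even when every hard prediction is the majority class.
from sklearn.metrics import roc_auc_score
print(roc_auc_score(y_test, clf.predict_proba(X_test_scaled)[:, 1]))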

Tackle Skewed Data - putting the minority class first (not really working...)


In [44]:
idx = y_train.sort_values(ascending=False).index

X_train = X_train.loc[idx]
y_train = y_train.loc[idx]

In [45]:
y_train[1:10]


Out[45]:
ticket_id
135795    1.0
207246    1.0
19018     1.0
30723     1.0
24374     1.0
217785    1.0
158970    1.0
180674    1.0
276618    1.0
Name: compliance, dtype: float64

In [46]:
X_train[1:10]


Out[46]:
fine_amount admin_fee state_fee late_fee discount_amount clean_up_cost judgment_amount lat lon agency_name_Buildings, Safety Engineering & Env Department ... violation_code_freq10_0 violation_code_freq10_1 violation_code_freq10_2 violation_code_freq10_3 violation_code_freq10_4 violation_code_freq10_5 violation_code_freq10_6 violation_code_freq10_7 violation_code_freq10_8 violation_code_freq10_9
ticket_id
135795 300.0 20.0 10.0 30.0 0.0 0.0 360.0 42.414985 -83.069262 1 ... 0 0 0 0 0 0 1 0 0 0
207246 50.0 20.0 10.0 5.0 0.0 0.0 85.0 42.332048 -83.047571 1 ... 0 0 0 0 0 0 0 0 0 0
19018 250.0 20.0 10.0 25.0 0.0 0.0 305.0 42.432577 -83.085659 1 ... 1 0 0 0 0 0 0 0 0 0
30723 250.0 20.0 10.0 0.0 0.0 0.0 280.0 42.379930 -83.084925 1 ... 1 0 0 0 0 0 0 0 0 0
24374 100.0 20.0 10.0 10.0 0.0 0.0 140.0 42.347130 -83.235799 0 ... 0 0 0 0 0 0 0 0 0 1
217785 100.0 20.0 10.0 10.0 0.0 0.0 140.0 42.353718 -83.206075 0 ... 0 0 0 0 0 1 0 0 0 0
158970 100.0 20.0 10.0 0.0 0.0 0.0 130.0 42.417123 -83.061637 0 ... 0 0 0 0 0 0 0 0 0 0
180674 50.0 20.0 10.0 0.0 0.0 0.0 80.0 42.409255 -82.955282 0 ... 0 0 0 1 0 0 0 0 0 0
276618 100.0 20.0 10.0 10.0 0.0 0.0 140.0 42.379975 -83.210198 0 ... 0 0 0 0 0 0 0 1 0 0

9 rows × 88 columns


In [47]:
X_train.fine_amount.value_counts()


Out[47]:
250.0      65882
50.0       15406
100.0      11804
200.0       9730
500.0       5266
1000.0      3785
3500.0      2941
300.0       2891
2500.0      1169
25.0        1081
125.0        592
1500.0       207
750.0        183
0.0          149
10000.0      140
350.0         98
5000.0        69
400.0         30
1200.0        29
7000.0        12
2000.0        12
600.0          9
220.0          4
3000.0         4
160.0          2
1250.0         2
150.0          2
1750.0         1
20.0           1
270.0          1
677.0          1
1.0            1
655.0          1
970.0          1
1030.0         1
2695.0         1
Name: fine_amount, dtype: int64

In [48]:
X_train.late_fee.value_counts()


Out[48]:
25.0      60462
5.0       13299
0.0       11729
10.0       9540
20.0       8697
50.0       4971
100.0      3662
350.0      2888
30.0       2827
250.0      1141
2.5         969
12.5        539
150.0       198
75.0        173
1000.0      140
35.0         97
500.0        69
120.0        29
40.0         28
700.0        12
200.0        11
60.0          9
300.0         4
22.0          3
16.0          2
125.0         2
15.0          1
175.0         1
67.7          1
269.5         1
0.1           1
65.5          1
97.0          1
Name: late_fee, dtype: int64

In [49]:
from sklearn.preprocessing import MinMaxScaler

scaler = MinMaxScaler()

X_train_scaled = scaler.fit_transform(X_train)
X_test_scaled = scaler.transform(X_test)

In [50]:
X_train_scaled[0]


Out[50]:
array([ 0.025     ,  0.        ,  0.        ,  0.025     ,  0.        ,
        0.        ,  0.02765186,  0.07887548,  0.56369222,  1.        ,
        0.        ,  0.        ,  0.        ,  0.        ,  0.        ,
        0.        ,  0.        ,  0.        ,  0.        ,  0.        ,
        0.        ,  0.        ,  0.        ,  0.        ,  0.        ,
        0.        ,  0.        ,  0.        ,  0.        ,  0.        ,
        0.        ,  0.        ,  0.        ,  0.        ,  0.        ,
        0.        ,  0.        ,  0.        ,  1.        ,  0.        ,
        0.        ,  0.        ,  0.        ,  0.        ,  0.        ,
        0.        ,  0.        ,  0.        ,  0.        ,  0.        ,
        0.        ,  0.        ,  0.        ,  0.        ,  0.        ,
        0.        ,  0.        ,  0.        ,  0.        ,  0.        ,
        0.        ,  0.        ,  0.        ,  0.        ,  0.        ,
        0.        ,  0.        ,  0.        ,  0.        ,  0.        ,
        0.        ,  0.        ,  0.        ,  0.        ,  0.        ,
        0.        ,  1.        ,  0.        ,  1.        ,  0.        ,
        0.        ,  0.        ,  0.        ,  0.        ,  0.        ,
        0.        ,  0.        ,  0.        ])

In [51]:
y_train.sum()


Out[51]:
8809.0

In [52]:
# X_train = X_train[0:17000]
# y_train = y_train[0:17000]

In [107]:
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import MinMaxScaler

scaler = MinMaxScaler()

X_train_scaled = scaler.fit_transform(X_train)
X_test_scaled = scaler.transform(X_test)

clf = MLPClassifier(hidden_layer_sizes = [800, 300, 100], alpha = 0.00,
                    random_state = 0,
                    shuffle = False,
                    learning_rate_init = 0.05,
                    batch_size = 4000,
                    learning_rate = 'invscaling',
                    momentum = 0.2,
                    power_t = 0.01,
                    verbose = True,
                    max_iter = 400,
                    tol = 0.00001,
                    solver='lbfgs')
# clf.fit(X_train_scaled, y_train)  # takes a long time to train; the loss printed below is from an earlier run
print(clf.loss_)


0.196058796195

In [156]:
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import MinMaxScaler

scaler = MinMaxScaler()

X_train_scaled = scaler.fit_transform(X_train)
X_test_scaled = scaler.transform(X_test)

clf = MLPClassifier(hidden_layer_sizes = [500, 50], alpha = 0.00,
                    random_state = 0,
                    shuffle = False,
                    learning_rate_init = 0.005,
                    batch_size = 1000,
                    learning_rate = 'invscaling',
                    momentum = 0.1,
                    power_t = 0.001,
                    verbose = True,
                    max_iter = 400,
                    tol = 0.0001,
                    solver='adam')
clf.fit(X_train_scaled, y_train)
print(clf.loss_)


Iteration 1, loss = 0.22494615
Iteration 2, loss = 0.21005819
Iteration 3, loss = 0.20703463
Iteration 4, loss = 0.20489479
Iteration 5, loss = 0.20432123
Iteration 6, loss = 0.20124023
Iteration 7, loss = 0.20001749
Iteration 8, loss = 0.19968353
Iteration 9, loss = 0.19816400
Iteration 10, loss = 0.19837893
Iteration 11, loss = 0.19676730
Iteration 12, loss = 0.19654055
Iteration 13, loss = 0.19579393
Iteration 14, loss = 0.19367073
Iteration 15, loss = 0.19463412
Iteration 16, loss = 0.19290120
Iteration 17, loss = 0.19236184
Iteration 18, loss = 0.19306091
Iteration 19, loss = 0.19208600
Iteration 20, loss = 0.19169312
Iteration 21, loss = 0.19182803
Iteration 22, loss = 0.19122741
Iteration 23, loss = 0.19130063
Iteration 24, loss = 0.19129424
Iteration 25, loss = 0.19092781
Iteration 26, loss = 0.19110565
Iteration 27, loss = 0.19029658
Iteration 28, loss = 0.19025961
Iteration 29, loss = 0.19050876
Iteration 30, loss = 0.19015408
Iteration 31, loss = 0.18992785
Iteration 32, loss = 0.18953857
Iteration 33, loss = 0.18937564
Iteration 34, loss = 0.18946347
Iteration 35, loss = 0.18959296
Iteration 36, loss = 0.18887185
Iteration 37, loss = 0.18928962
Iteration 38, loss = 0.18893568
Iteration 39, loss = 0.18898488
Training loss did not improve more than tol=0.000100 for two consecutive epochs. Stopping.
0.188984879076

In [140]:
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import MinMaxScaler

scaler = MinMaxScaler()

X_train_scaled = scaler.fit_transform(X_train)
X_test_scaled = scaler.transform(X_test)

clf = MLPClassifier(hidden_layer_sizes = [100], alpha = 0.00,
                    random_state = 0,
                    shuffle = False,
                    learning_rate_init = 0.05,
                    batch_size = 1000,
                    learning_rate = 'invscaling',
                    momentum = 0.5,
                    power_t = 0.01,
                    verbose = True,
                    max_iter = 400,
                    tol = 0.0001,
                    solver='sgd')
clf.fit(X_train_scaled, y_train)
print(clf.loss_)


Iteration 1, loss = 0.25462873
Iteration 2, loss = 0.22491943
Iteration 3, loss = 0.22129263
Iteration 4, loss = 0.22015344
Iteration 5, loss = 0.21967639
Iteration 6, loss = 0.21933914
Iteration 7, loss = 0.21906955
Iteration 8, loss = 0.21883661
Iteration 9, loss = 0.21861575
Iteration 10, loss = 0.21841461
Iteration 11, loss = 0.21823599
Iteration 12, loss = 0.21804616
Iteration 13, loss = 0.21791446
Iteration 14, loss = 0.21775736
Iteration 15, loss = 0.21760938
Iteration 16, loss = 0.21747069
Iteration 17, loss = 0.21734777
Iteration 18, loss = 0.21722110
Iteration 19, loss = 0.21710620
Iteration 20, loss = 0.21698475
Iteration 21, loss = 0.21687799
Iteration 22, loss = 0.21678495
Iteration 23, loss = 0.21667511
Iteration 24, loss = 0.21658651
Iteration 25, loss = 0.21649428
Iteration 26, loss = 0.21638733
Iteration 27, loss = 0.21631963
Iteration 28, loss = 0.21624300
Iteration 29, loss = 0.21615635
Training loss did not improve more than tol=0.000100 for two consecutive epochs. Stopping.
0.216156351718

In [157]:
clf.score(X_train_scaled, y_train)


Out[157]:
0.94395430753530629

In [158]:
clf.score(X_test_scaled, y_test)


Out[158]:
0.9419974981894792

In [159]:
from sklearn.metrics import recall_score, precision_score, f1_score

train_pred = clf.predict(X_train_scaled)
print(precision_score(y_train, train_pred),
      recall_score(y_train, train_pred),
      f1_score(y_train, train_pred))


0.920134510298 0.24849585651 0.391312120129

In [160]:
from sklearn.metrics import recall_score, precision_score, f1_score

test_pred = clf.predict(X_test_scaled)
print(precision_score(y_test, test_pred),
      recall_score(y_test, test_pred),
      f1_score(y_test, test_pred))


0.879045996593 0.233801540553 0.369362920544

In [161]:
test_pro = clf.predict_proba(X_test_scaled)

In [162]:
def draw_roc_curve():
    %matplotlib notebook
    import matplotlib.pyplot as plt
    from sklearn.metrics import roc_curve, auc

    fpr_lr, tpr_lr, _ = roc_curve(y_test, test_pro[:,1])
    roc_auc_lr = auc(fpr_lr, tpr_lr)

    plt.figure()
    plt.xlim([-0.01, 1.00])
    plt.ylim([-0.01, 1.01])
    plt.plot(fpr_lr, tpr_lr, lw=3, label='MLP ROC curve (area = {:0.2f})'.format(roc_auc_lr))
    plt.xlabel('False Positive Rate', fontsize=16)
    plt.ylabel('True Positive Rate', fontsize=16)
    plt.title('ROC curve (blight compliance classifier)', fontsize=16)
    plt.legend(loc='lower right', fontsize=13)
    plt.plot([0, 1], [0, 1], color='navy', lw=3, linestyle='--')
    plt.axes().set_aspect('equal')
    plt.show()
    
draw_roc_curve()



In [164]:
from sklearn.metrics import recall_score, precision_score, f1_score

test_pred = clf.predict(X_test_scaled)
print(precision_score(y_test, test_pred),
      recall_score(y_test, test_pred),
      f1_score(y_test, test_pred))


0.879045996593 0.233801540553 0.369362920544

In [165]:
def draw_pr_curve():
    from sklearn.metrics import precision_recall_curve
    from sklearn.metrics import roc_curve, auc

    precision, recall, thresholds = precision_recall_curve(y_test, test_pro[:,1])
    print(len(thresholds))
    idx = min(range(len(thresholds)), key=lambda i: abs(thresholds[i]-0.5))
    print(idx)
    print(np.argmin(np.abs(thresholds)))
    
    closest_zero = idx # np.argmin(np.abs(thresholds))
    closest_zero_p = precision[closest_zero]
    closest_zero_r = recall[closest_zero]

    import matplotlib.pyplot as plt
    plt.figure()
    plt.xlim([0.0, 1.01])
    plt.ylim([0.0, 1.01])
    plt.plot(precision, recall, label='Precision-Recall Curve')
    plt.plot(closest_zero_p, closest_zero_r, 'o', markersize = 12, fillstyle = 'none', c='r', mew=3)
    plt.xlabel('Precision', fontsize=16)
    plt.ylabel('Recall', fontsize=16)
    plt.axes().set_aspect('equal')
    plt.show()
    
    return thresholds

thresholds = draw_pr_curve()


28428
27847
0

In [166]:
import matplotlib.pyplot as plt
%matplotlib notebook
plt.plot(thresholds)
plt.show()


Conclusion: putting the minority class first doesn't really work.

But if we reduce the amount of the majority class, the result changes. What if we just duplicate the minority class?


In [167]:
df_train.sort_values('compliance', ascending=False, inplace=True)

In [168]:
df_train.compliance.sum()


Out[168]:
11597.0

In [170]:
df_train.shape


Out[170]:
(159880, 89)

In [172]:
df_train[11595:11599]


Out[172]:
fine_amount admin_fee state_fee late_fee discount_amount clean_up_cost judgment_amount compliance lat lon ... violation_code_freq10_0 violation_code_freq10_1 violation_code_freq10_2 violation_code_freq10_3 violation_code_freq10_4 violation_code_freq10_5 violation_code_freq10_6 violation_code_freq10_7 violation_code_freq10_8 violation_code_freq10_9
ticket_id
284211 200.0 20.0 10.0 20.0 0.0 0.0 250.0 1.0 42.429360 -83.062484 ... 0 0 0 0 0 0 0 0 0 0
284177 200.0 20.0 10.0 0.0 0.0 0.0 230.0 1.0 42.424025 -83.038045 ... 0 0 0 0 0 0 0 0 0 0
194645 3500.0 20.0 10.0 350.0 0.0 0.0 3880.0 0.0 42.356791 -83.129201 ... 0 0 1 0 0 0 0 0 0 0
195112 200.0 20.0 10.0 20.0 0.0 0.0 250.0 0.0 42.401556 -83.181450 ... 0 0 1 0 0 0 0 0 0 0

4 rows × 89 columns


In [175]:
df_train = df_train.append([df_train[0:11597]]*10,ignore_index=True)

In [176]:
df_train.shape


Out[176]:
(275850, 89)

In [178]:
df_train.compliance.sum()


Out[178]:
127567.0

In [179]:
X_data, X_keep, y_data, y_keep = train_test_split(df_train[train_features], 
                                                    df_train.compliance, 
                                                    random_state=0,
                                                    test_size=0.05)

X_train, X_test, y_train, y_test = train_test_split(X_data[train_features], 
                                                    y_data, 
                                                    random_state=0,
                                                    test_size=0.2)
from sklearn.preprocessing import MinMaxScaler

scaler = MinMaxScaler()

X_train_scaled = scaler.fit_transform(X_train)
X_test_scaled = scaler.transform(X_test)
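
Note that duplicating rows before splitting lets copies of the same ticket land in both folds, which will inflate the held-out scores below. A sketch of the safer ordering, assuming the pre-duplication df_train:

In [ ]:
# Sketch (assumes df_train *before* the append above): split first, then
# oversample only the training fold so duplicates cannot leak into the test fold.
X_tr, X_te, y_tr, y_te = train_test_split(df_train[train_features], df_train.compliance,
                                          random_state=0, test_size=0.2)
pos = y_tr == 1
X_tr = pd.concat([X_tr] + [X_tr[pos]] * 10)
y_tr = pd.concat([y_tr] + [y_tr[pos]] * 10)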

In [180]:
from sklearn.neural_network import MLPClassifier

clf = MLPClassifier(hidden_layer_sizes = [500, 50], alpha = 0.00,
                    random_state = 0,
                    shuffle = False,
                    learning_rate_init = 0.005,
                    batch_size = 1000,
                    learning_rate = 'invscaling',
                    momentum = 0.1,
                    power_t = 0.001,
                    verbose = True,
                    max_iter = 400,
                    tol = 0.0001,
                    solver='adam')
clf.fit(X_train_scaled, y_train)
print(clf.loss_)


Iteration 1, loss = 0.54154951
Iteration 2, loss = 0.52538852
Iteration 3, loss = 0.51723227
Iteration 4, loss = 0.51168551
Iteration 5, loss = 0.50815419
Iteration 6, loss = 0.50932443
Iteration 7, loss = 0.50404952
Iteration 8, loss = 0.50261796
Iteration 9, loss = 0.50179177
Iteration 10, loss = 0.50068578
Iteration 11, loss = 0.50034584
Iteration 12, loss = 0.49972383
Iteration 13, loss = 0.49883567
Iteration 14, loss = 0.49905254
Iteration 15, loss = 0.49841198
Iteration 16, loss = 0.49784664
Iteration 17, loss = 0.49759338
Iteration 18, loss = 0.49696183
Iteration 19, loss = 0.49606055
Iteration 20, loss = 0.49728161
Iteration 21, loss = 0.49611661
Iteration 22, loss = 0.49538626
Iteration 23, loss = 0.49448962
Iteration 24, loss = 0.49481198
Iteration 25, loss = 0.49434087
Iteration 26, loss = 0.49482888
Iteration 27, loss = 0.49384178
Iteration 28, loss = 0.49532573
Iteration 29, loss = 0.49393208
Iteration 30, loss = 0.49390311
Training loss did not improve more than tol=0.000100 for two consecutive epochs. Stopping.
0.49390311328

In [183]:
from sklearn.metrics import recall_score, precision_score, f1_score, accuracy_score

train_pred = clf.predict(X_train_scaled)
print(accuracy_score(y_train, train_pred),
      precision_score(y_train, train_pred),
      recall_score(y_train, train_pred),
      f1_score(y_train, train_pred))

test_pred = clf.predict(X_test_scaled)
print(accuracy_score(y_test, test_pred),
      precision_score(y_test, test_pred),
      recall_score(y_test, test_pred),
      f1_score(y_test, test_pred))


0.762341100432 0.821855816688 0.620316385813 0.707003822405
0.757288407235 0.815264337652 0.614826420384 0.700998942296

In [184]:
test_pro = clf.predict_proba(X_test_scaled)

def draw_roc_curve():
    %matplotlib notebook
    import matplotlib.pyplot as plt
    from sklearn.metrics import roc_curve, auc

    fpr_lr, tpr_lr, _ = roc_curve(y_test, test_pro[:,1])
    roc_auc_lr = auc(fpr_lr, tpr_lr)

    plt.figure()
    plt.xlim([-0.01, 1.00])
    plt.ylim([-0.01, 1.01])
    plt.plot(fpr_lr, tpr_lr, lw=3, label='MLP ROC curve (area = {:0.2f})'.format(roc_auc_lr))
    plt.xlabel('False Positive Rate', fontsize=16)
    plt.ylabel('True Positive Rate', fontsize=16)
    plt.title('ROC curve (blight compliance classifier)', fontsize=16)
    plt.legend(loc='lower right', fontsize=13)
    plt.plot([0, 1], [0, 1], color='navy', lw=3, linestyle='--')
    plt.axes().set_aspect('equal')
    plt.show()
    
draw_roc_curve()



In [185]:
def draw_pr_curve():
    from sklearn.metrics import precision_recall_curve
    from sklearn.metrics import roc_curve, auc

    precision, recall, thresholds = precision_recall_curve(y_test, test_pro[:,1])
    print(len(thresholds))
    idx = min(range(len(thresholds)), key=lambda i: abs(thresholds[i]-0.5))
    print(idx)
    print(np.argmin(np.abs(thresholds)))
    
    closest_zero = idx # np.argmin(np.abs(thresholds))
    closest_zero_p = precision[closest_zero]
    closest_zero_r = recall[closest_zero]

    import matplotlib.pyplot as plt
    plt.figure()
    plt.xlim([0.0, 1.01])
    plt.ylim([0.0, 1.01])
    plt.plot(precision, recall, label='Precision-Recall Curve')
    plt.plot(closest_zero_p, closest_zero_r, 'o', markersize = 12, fillstyle = 'none', c='r', mew=3)
    plt.xlabel('Precision', fontsize=16)
    plt.ylabel('Recall', fontsize=16)
    plt.axes().set_aspect('equal')
    plt.show()
    
    return thresholds

thresholds = draw_pr_curve()


35266
25709
0

Resample differently.

Looks like an efficient way to handle the skewed data is to resample differently:

Over-sample from your minority class and under-sample from your majority class, so you get a more balanced dataset.

https://www.quora.com/In-classification-how-do-you-handle-an-unbalanced-training-set

Let's try this technique and upload the code again to see if the AUC score changes. Previously we got 0.77 on the course assignment platform.
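
A minimal sketch of that idea with sklearn.utils.resample, applied to the (pre-duplication) training frame; resampling both classes to twice the original minority count is an arbitrary choice for illustration:

In [ ]:
# Sketch: under-sample the majority class and over-sample the minority class
# to a balanced frame before splitting and training.
from sklearn.utils import resample

minority = df_train[df_train.compliance == 1]
majority = df_train[df_train.compliance == 0]

majority_down = resample(majority, replace=False, n_samples=2 * len(minority), random_state=0)
minority_up = resample(minority, replace=True, n_samples=2 * len(minority), random_state=0)

df_balanced = pd.concat([majority_down, minority_up])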

Check the notebook Assignment-4 Submit 2.

As the results show, this doesn't change the AUC score much.

Precision and recall change for the resampled test data, but stay the same for the unbalanced test data.

Interesting :)


In [ ]: